Pre-Finetuning with Impact Duration Awareness for Stock Movement Prediction
Chiu, Chr-Jr, Chen, Chung-Chi, Huang, Hen-Hsen, Chen, Hsin-Hsi
Understanding how long a news event's impact on the stock market lasts is crucial for effective time-series forecasting, yet this facet is largely overlooked in current research. This paper addresses the gap by introducing a novel dataset, the Impact Duration Estimation Dataset (IDED), designed specifically to estimate impact duration from investor opinions. Our research establishes that pre-finetuning language models on IDED can enhance performance in text-based stock movement prediction. In addition, we compare our proposed pre-finetuning task against sentiment-analysis pre-finetuning, further affirming the importance of learning impact duration. Our findings highlight the promise of this novel research direction in stock movement prediction, offering a new avenue for financial forecasting. We also release IDED and the pre-finetuned language models under the CC BY-NC-SA 4.0 license for academic use, fostering further exploration in this field.
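The two-stage recipe the abstract describes — pre-finetune on impact-duration labels, then fine-tune the same weights on movement labels — can be sketched with a deliberately tiny stand-in model. The hashed bag-of-words encoder, perceptron, and all headlines below are illustrative assumptions, not the paper's actual architecture or data:

```python
import zlib

DIM = 128  # size of the hashed feature space

def featurize(text):
    """Hashed bag-of-words vector -- a toy stand-in for a language model encoder."""
    v = [0.0] * DIM
    for tok in text.lower().split():
        v[zlib.crc32(tok.encode()) % DIM] += 1.0
    return v

def train(data, w=None, epochs=50):
    """Perceptron training. Passing `w` lets a later stage continue from earlier
    weights -- that reuse is the essence of pre-finetuning."""
    if w is None:
        w = [0.0] * DIM
    for _ in range(epochs):
        for text, label in data:  # label is +1 or -1
            x = featurize(text)
            score = sum(wi * xi for wi, xi in zip(w, x))
            if label * score <= 0:  # misclassified: nudge weights toward the label
                w = [wi + label * xi for wi, xi in zip(w, x)]
    return w

def predict(w, text):
    return 1 if sum(wi * xi for wi, xi in zip(w, featurize(text))) > 0 else -1

# Stage 1: pre-finetune on impact-duration labels (+1 long-lasting, -1 short-lived).
duration_data = [
    ("regulator opens antitrust probe", +1),
    ("ceo resigns amid fraud inquiry", +1),
    ("quarterly dividend date announced", -1),
    ("minor website outage resolved", -1),
]
w = train(duration_data)

# Stage 2: fine-tune the *same* weights on stock-movement labels (+1 up, -1 down).
movement_data = [
    ("record earnings beat expectations", +1),
    ("profit warning issued", -1),
]
w = train(movement_data, w=w)
```

In a real setup the shared component would be a pretrained transformer encoder rather than hashed features, but the control flow — stage 2 starting from stage 1's weights — is the same.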
Jetsons at FinNLP 2024: Towards Understanding the ESG Impact of a News Article using Transformer-based Models
Dakle, Parag Pravin, Gon, Alolika, Zha, Sihan, Wang, Liang, Rallabandi, SaiKrishna, Raghavan, Preethi
In this paper, we describe the approaches explored by the Jetsons team for the Multi-Lingual ESG Impact Duration Inference (ML-ESG-3) shared task. The shared task focuses on predicting the duration and type of the ESG impact of a news article, with a dataset of 2,059 news titles and articles in English, French, Korean, and Japanese. For the impact duration classification task, we fine-tuned XLM-RoBERTa with a custom fine-tuning strategy combined with self-training, and fine-tuned DeBERTa-v3 on English translations only. These models ranked first on the leaderboard individually for Korean and Japanese, and as part of an ensemble for English. For the impact type classification task, our XLM-RoBERTa model fine-tuned with the custom strategy ranked first for English.
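The self-training ingredient mentioned above — train on labelled data, adopt confident pseudo-labels from the unlabelled pool, retrain — can be sketched independently of any particular transformer. The token-count classifier, threshold, and headlines below are illustrative assumptions, not the team's actual XLM-RoBERTa setup:

```python
from collections import Counter

def train_fn(data):
    """Per-class token counts -- a toy stand-in for fine-tuning a transformer."""
    counts = {}
    for text, label in data:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def predict_fn(model, text):
    """Return (label, confidence) from normalised token-overlap scores."""
    toks = text.lower().split()
    scores = {lab: sum(c[t] for t in toks) for lab, c in model.items()}
    total = sum(scores.values()) or 1
    label = max(scores, key=scores.get)
    return label, scores[label] / total

def self_train(labeled, unlabeled, threshold=0.7, rounds=3):
    data, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = train_fn(data)
        keep = []
        for text in pool:
            label, conf = predict_fn(model, text)
            if conf >= threshold:
                data.append((text, label))  # adopt the confident pseudo-label
            else:
                keep.append(text)           # revisit in the next round
        pool = keep
    return train_fn(data)

labeled = [
    ("impact lasts for years", "long"),
    ("impact fades in days", "short"),
]
unlabeled = ["impact lasts many years", "fades within days"]
model = self_train(labeled, unlabeled)
```

The confidence threshold is the main knob: set too low, noisy pseudo-labels pollute the training set; set too high, the unlabelled pool is never used.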
ESG Classification by Implicit Rule Learning via GPT-4
Yun, Hyo Jeong, Kim, Chanyoung, Hahm, Moonjeong, Kim, Kyuri, Son, Guijin
Environmental, social, and governance (ESG) factors are widely adopted as indicators of higher investment returns. Accordingly, ongoing efforts seek to automate ESG evaluation with language models that can easily extract signals from massive amounts of web text. However, recent approaches suffer from a lack of training data, as rating agencies keep their evaluation metrics confidential. This paper investigates whether state-of-the-art language models such as GPT-4 can be guided to align with unknown ESG evaluation criteria through strategies such as prompting, chain-of-thought reasoning, and dynamic in-context learning. We demonstrate the efficacy of these approaches by ranking 2nd in the Shared-Task ML-ESG-3 Impact Type track for Korean without updating the model on the provided training data. Using smaller models with openly available weights, we also explore how prompt adjustments affect language models' ability to address financial tasks, and we observe that longer general pre-training correlates with enhanced performance on financial downstream tasks. Our findings showcase the potential of language models to navigate complex, subjective evaluation guidelines despite lacking explicit training examples, revealing opportunities for training-free solutions to financial downstream tasks.
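Of the strategies listed, dynamic in-context learning is the easiest to sketch: retrieve the labelled examples most similar to the query and prepend them to the prompt as demonstrations. The Jaccard retriever, label set, and headlines below are illustrative assumptions; the paper uses GPT-4 rather than this fixed template:

```python
def jaccard(a, b):
    """Token-overlap similarity -- a toy retriever stand-in for an embedding model."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def build_prompt(query, labelled_pool, k=2):
    """Dynamic in-context learning: pick the k labelled examples most similar
    to the query and format them as demonstrations ahead of it."""
    demos = sorted(labelled_pool, key=lambda ex: jaccard(query, ex[0]),
                   reverse=True)[:k]
    parts = ["Classify the ESG impact type of each headline."]
    for text, label in demos:
        parts.append(f"Headline: {text}\nImpact type: {label}")
    parts.append(f"Headline: {query}\nImpact type:")
    return "\n\n".join(parts)

pool = [
    ("factory emissions exceed legal limit", "Risk"),
    ("company funds local literacy programme", "Opportunity"),
    ("board adopts new audit committee charter", "Opportunity"),
]
prompt = build_prompt("plant emissions breach limit again", pool, k=2)
```

Because the demonstrations are chosen per query rather than fixed, the prompt adapts to each headline — which is what lets the model track evaluation criteria it was never explicitly trained on.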